Concordia University at the TREC 15 QA Track

Authors

  • Leila Kosseim
  • Alex Beaudoin
  • Abolfazl Keighobadi Lamjiri
  • Majid Razmara
Abstract

In this paper, we describe the system we used for the TREC Question Answering Track. For factoid and list questions, two different approaches were used: a redundancy-based approach using a modified version of Aranea, and a parse-tree-based unifier. The modified version of Aranea essentially extracts answers from Google snippets and then projects them onto the AQUAINT collection. The parse-tree-based unifier is a linguistics-based approach that chunks candidate sentences syntactically and uses a heuristic measure to compute the similarity of each chunk in a candidate to its counterpart in the question. To answer the “Other” questions, our system extracts from Wikipedia articles a list of interest-marking terms related to the topic and uses them, along with various interest-marking triggers, to extract and score sentences from the AQUAINT document collection. We submitted three runs using different variations of the system. For the factoid questions, the average score of our three runs is 0.202; for the list questions, we achieved an average of 0.084; and for the “Other” questions, an average F-score of 0.192.
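
To make the sentence-scoring step for the “Other” questions concrete, the short Python sketch below ranks candidate AQUAINT sentences by how many interest-marking terms mined from a Wikipedia article on the topic they contain, with extra weight for generic interest-marking triggers. The term lists, weights, and function names are illustrative assumptions, not the authors' actual implementation.

import re

def score_sentence(sentence, interest_terms, triggers, trigger_weight=2.0):
    # Tokenize the candidate sentence into lowercase word tokens.
    tokens = set(re.findall(r"\w+", sentence.lower()))
    # One point for each Wikipedia-derived interest-marking term present.
    term_hits = sum(1 for t in interest_terms if t.lower() in tokens)
    # Extra weight for generic interest-marking triggers (superlatives, firsts, ...).
    trigger_hits = sum(1 for t in triggers if t.lower() in tokens)
    return term_hits + trigger_weight * trigger_hits

# Hypothetical example for the topic "Warsaw Pact".
interest_terms = ["1955", "Soviet", "treaty", "dissolved"]  # assumed to be mined from Wikipedia
triggers = ["first", "last", "only", "largest"]             # assumed generic interest markers
candidates = [
    "The Warsaw Pact was signed in 1955 as a Soviet-led treaty.",
    "Warsaw hosted a trade fair that spring.",
]
best = max(candidates, key=lambda s: score_sentence(s, interest_terms, triggers))
print(best)  # the sentence mentioning 1955, Soviet, and treaty ranks highest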

Similar articles

Concordia University at the TREC 2007 QA Track

In this paper, we describe the system we used for the TREC 2007 Question Answering Track. For factoid questions, our redundancy-based approach using a modified version of Aranea was further enhanced. Our list question answerer uses a clustering method to group candidate answers that co-occur together. It also uses the target and question keywords as spies to pinpoint the right cluster of can...

TREC 2006 at Maryland: Blog, Enterprise, Legal and QA Tracks

In TREC 2006, teams from the University of Maryland participated in the Blog track, the Expert Search task of the Enterprise track, the Complex Interactive Question Answering task of the Question Answering track, and the Legal track. This paper reports our results.

THUIR at TREC 2004: QA

In this paper, we describe the ideas and related experiments of the Tsinghua University IR group in the TREC 2004 QA track. In this track, our system consists of three components: question analysis, information retrieval, and answer extraction. The question analysis component extracts query terms and the answer type. The information retrieval component retrieves candidate documents from the index set based on paragraph-level ...

A Hybrid Approach for QA Track Definitional Questions

We present an overview of DefScriber, a system developed at Columbia University that combines knowledge-based and statistical methods to answer definitional questions of the form “What is X?” We discuss how DefScriber was applied to the definition questions in the TREC 2003 QA track main task. We conclude with an analysis of our system’s results on the definition questions.

The University of Amsterdam at the TREC 2007 QA Track

In our participation in the TREC 2007 Question Answering (QA) track, we focused on three tasks. First, we processed the new blog corpus and converted it to formats which could be used by our QA system. Second, we rewrote the module interface code in Java in order to improve the maintainability of the system. And third, we added a new table stream which has learned associations between question ...

Journal:

Volume:   Issue:

Pages:   -

Publication date: 2006